
    Adaptive Neuro-Filtering Based Visual Servo Control of a Robotic Manipulator

    This paper focuses on solutions for flexibly regulating a robot by vision. A new visual servoing technique based on Kalman filtering (KF) combined with a neural network (NN) is developed, which requires no calibration parameters of the robotic system. The statistics of the system noise and observation noise are first modeled as Gaussian white noise sequences, and the nonlinear mapping between the robotic vision and motor spaces is then identified on-line using the standard Kalman recursive equations. In real robotic workshops, perfect statistical knowledge of the noise is difficult to obtain, so this paper also studies an adaptive neuro-filtering approach based on KF for on-line estimation of the mapping. The Kalman recursive equations are improved by a feedforward NN, in which the neural estimator dynamically adjusts its weights to minimize the estimation error of the robotic vision-motor mapping, without knowledge of the noise variances. Finally, the proposed visual servoing based on adaptive neuro-filtering has been successfully implemented for robotic pose regulation, and the experimental results demonstrate its validity and practicality on a six-degree-of-freedom (DOF) robotic system with an uncalibrated hand-eye configuration.
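
    As a rough illustration of the on-line mapping identification described above, the following minimal sketch (hypothetical names and shapes, not the authors' code) estimates the vectorized vision-motor Jacobian with the standard Kalman recursion; the paper's adaptive neural adjustment of the filter is not reproduced here.

    import numpy as np

    def kf_jacobian_step(x, P, dq, dy, Q, R):
        """One Kalman update of the vectorized vision-motor Jacobian.

        x : (m*n,) row-major vec(J); P : (m*n, m*n) covariance
        dq: (n,) joint displacement; dy: (m,) image-feature displacement
        Q, R: process and observation noise covariances (assumed known here)
        """
        m, n = dy.size, dq.size
        H = np.kron(np.eye(m), dq)          # dy = J @ dq  ==  H @ vec(J)
        P = P + Q                           # predict: random-walk Jacobian
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (dy - H @ x)            # correct with the innovation
        P = (np.eye(m * n) - K @ H) @ P
        return x, P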

    Unsupervised learning of depth estimation, camera motion prediction and dynamic object localization from video

    Estimating scene depth, predicting camera motion and localizing dynamic objects from monocular videos are fundamental but challenging research topics in computer vision. Deep learning has recently demonstrated impressive performance on these tasks. This article presents a novel unsupervised deep learning framework for scene depth estimation, camera motion prediction and dynamic object localization from videos. Consecutive stereo image pairs are used to train the system, while only monocular images are needed for inference. The supervisory signals for the training stage come from various forms of image synthesis. Because consecutive stereo video is used, both spatial and temporal photometric errors are available for synthesizing the images. Furthermore, to mitigate the impact of occlusions, adaptive left-right consistency and forward-backward consistency losses are added to the objective function. Experimental results on the KITTI and Cityscapes datasets demonstrate that our method is more effective in depth estimation, camera motion prediction and dynamic object localization than previous models.
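
    The spatial photometric error mentioned above compares a target view with its stereo partner warped by the predicted disparity. A minimal PyTorch-style sketch follows (tensor shapes and the normalized-disparity convention are assumptions; the temporal warp and the adaptive consistency terms are omitted).

    import torch
    import torch.nn.functional as F

    def spatial_photometric_loss(target, source, disp):
        # target, source: (B, 3, H, W) stereo pair
        # disp: (B, 1, H, W) disparity in normalized [-1, 1] x-units
        b, _, h, w = target.shape
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                torch.linspace(-1, 1, w), indexing="ij")
        grid = torch.stack((xs, ys), -1).unsqueeze(0).expand(b, -1, -1, -1).clone()
        grid[..., 0] = grid[..., 0] - disp.squeeze(1)   # shift along the baseline
        warped = F.grid_sample(source, grid, align_corners=True)
        return (target - warped).abs().mean()           # L1 photometric error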

    Adaptive obstacle detection for mobile robots in urban environments using downward-looking 2D LiDAR

    Environment perception is important for collision-free motion planning of outdoor mobile robots. This paper presents an adaptive obstacle detection method for outdoor mobile robots using a single downward-looking LiDAR sensor. The method begins by extracting line segments from the raw sensor data and then estimates the height and direction vector of the scanned road surface at each moment. Subsequently, the segments are classified as either road surface or obstacles based on the average height of each line segment and its deviation from the road vector estimated from previous measurements. A series of experiments was conducted in several scenarios, including both normal and complex scenes. The experimental results show that the proposed approach can accurately detect obstacles on roads and effectively handle obstacles of different heights in urban road environments.
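
    The ground/obstacle decision described above can be pictured as a per-segment test against the estimated road surface. The sketch below is one plausible reading of that rule (the thresholds and the 2-D point representation are assumptions, not the paper's values).

    import numpy as np

    def classify_segments(segments, road_height, road_dir,
                          h_thresh=0.15, angle_thresh=np.deg2rad(10)):
        # segments: list of (N, 2) arrays of scan points (x, height)
        # road_height: surface height estimated from previous scans
        # road_dir: unit direction vector of the estimated road surface
        labels = []
        for pts in segments:
            mean_h = pts[:, 1].mean()              # average segment height
            d = pts[-1] - pts[0]
            d = d / (np.linalg.norm(d) + 1e-9)     # segment direction
            angle = np.arccos(np.clip(abs(d @ road_dir), -1.0, 1.0))
            on_road = abs(mean_h - road_height) < h_thresh and angle < angle_thresh
            labels.append("road" if on_road else "obstacle")
        return labels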

    Unsupervised framework for depth estimation and camera motion prediction from video

    Depth estimation from monocular video plays a crucial role in scene perception. A significant drawback of supervised learning models is their need for vast amounts of manually labeled data (ground truth) for training. To overcome this limitation, unsupervised learning strategies that require no ground truth have attracted extensive attention from researchers in the past few years. This paper presents a novel unsupervised framework for jointly estimating single-view depth and predicting camera motion. Stereo image sequences are used to train the model, while only monocular images are required for inference. The presented framework is composed of two CNNs (a depth CNN and a pose CNN) which are trained concurrently and tested independently. The objective function is constructed from the epipolar geometry constraints between stereo image sequences. To improve the accuracy of the model, a left-right consistency loss is added to the objective function. The use of stereo image sequences enables us to exploit both spatial information between stereo images and the temporal photometric warp error from image sequences. Experimental results on the KITTI and Cityscapes datasets show that our model not only outperforms prior unsupervised approaches but also achieves results comparable with several supervised methods. Moreover, we also train our model on the EuRoC dataset, which was captured in an indoor environment. Experiments in indoor and outdoor scenes are conducted to test the generalization capability of the model.
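
    For a rectified stereo pair, the epipolar geometry constraints mentioned above reduce to a one-line disparity-depth relation, which is what lets stereo training supervise a monocular depth CNN. A small sketch under assumed conventions (focal length in pixels, baseline in meters):

    import torch

    def disparity_to_depth(disp, focal_px, baseline_m):
        # Rectified stereo: depth = focal * baseline / disparity.
        # disp: (B, 1, H, W) predicted disparity in pixels.
        return focal_px * baseline_m / disp.clamp(min=1e-6)

    # The left-right consistency loss named above then penalizes disagreement
    # between disp_l at pixel (u, v) and disp_r sampled at (u - disp_l, v).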

    Multimodal Information Fusion for High-Robustness and Low-Drift State Estimation of UGVs in Diverse Scenes

    Currently, the autonomous positioning of unmanned ground vehicles (UGVs) still faces the problems of insufficient persistence and poor reliability, especially in challenging scenarios where satellites are denied or sensing modalities such as vision or laser are degraded. Based on multimodal information fusion and failure detection (FD), this article proposes a high-robustness and low-drift state estimation system suitable for multiple scenes, which loosely couples light detection and ranging (LiDAR), inertial measurement units (IMUs), a stereo camera, encoders, and an attitude and heading reference system (AHRS). Firstly, a state estimator with a variable fusion mode is designed based on error-state extended Kalman filtering (ES-EKF), which can fuse the encoder-AHRS subsystem (EAS), visual-inertial subsystem (VIS) and LiDAR subsystem (LS), and change its integration structure online by selecting a fusion mode. Secondly, to improve the robustness of the whole system in challenging environments, an information manager is created, which judges the health status of the subsystems via degeneration metrics and then selects online the appropriate information sources and variables to enter the estimator according to their health status. Finally, the proposed system is extensively evaluated on datasets collected from six typical scenes: street, field, forest, forest-at-night, street-at-night and tunnel-at-night. The experimental results show that our framework achieves better or comparable accuracy and robustness relative to existing publicly available systems.
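
    The information manager described above can be thought of as a health-gated switch in front of the ES-EKF. The following sketch shows one plausible mode-selection rule (names, metrics and thresholds are assumptions, not the authors' logic).

    from dataclasses import dataclass

    @dataclass
    class SubsystemHealth:
        name: str
        metric: float      # degeneration metric; lower is healthier
        threshold: float

    def select_fusion_mode(vis, ls, eas):
        # The encoder-AHRS subsystem is the always-available fallback;
        # VIS and LS are fused only while their metrics stay healthy.
        sources = [eas.name]
        if vis.metric < vis.threshold:
            sources.append(vis.name)
        if ls.metric < ls.threshold:
            sources.append(ls.name)
        return sources

    # Example: at night the visual subsystem degrades and is dropped online.
    mode = select_fusion_mode(SubsystemHealth("VIS", 0.9, 0.5),
                              SubsystemHealth("LS", 0.2, 0.5),
                              SubsystemHealth("EAS", 0.0, 1.0))
    print(mode)  # ['EAS', 'LS']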

    Robust Kalman Filtering Cooperated Elman Neural Network Learning for Vision-Sensing-Based Robotic Manipulation with Global Stability

    In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated, model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF) in conjunction with Elman neural network (ENN) learning techniques. The global mapping between the vision space and the robotic workspace is learned using an ENN, and this learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is obtained by using a robust KF to refine the ENN learning result, so as to achieve precise convergence of the robot to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using new input-output data pairs obtained from the KF cycle to ensure globally stable robotic manipulation. Thus, our method, requiring neither camera nor model parameters, avoids the performance degradation caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with an eye-in-hand configuration.
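
    The Elman network used above is a simple recurrent network whose hidden state is fed back through context units. A generic minimal sketch (layer sizes and initialization are assumptions; the robust-KF refinement is not shown):

    import numpy as np

    class ElmanNet:
        def __init__(self, n_in, n_hidden, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.w_in = rng.normal(0, 0.1, (n_hidden, n_in))
            self.w_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context
            self.w_out = rng.normal(0, 0.1, (n_out, n_hidden))
            self.context = np.zeros(n_hidden)

        def forward(self, x):
            # Hidden state depends on the input and on the previous hidden
            # state copied into the context units -- the defining Elman trait.
            h = np.tanh(self.w_in @ x + self.w_ctx @ self.context)
            self.context = h
            return self.w_out @ h

    # Example: map a 6-D joint displacement to an 8-D image-feature change.
    net = ElmanNet(6, 20, 8)
    dy = net.forward(np.ones(6))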

    Research on Dynamics and Stable Tracking Model of Six-axis Simulation Turntable

    A new type of six-axis simulation turntable for simulating ship swaying is proposed in this paper, and the dynamics and stable tracking model of the six-axis simulation turntable are investigated. Firstly, according to the design indexes of the six-axis simulation/testing turntable, the dynamics of the turntable was established by the Lagrange method. The dynamics was then simulated in Matlab, which provides data and a theoretical basis for dynamics compensation and driving motor selection for the six-axis simulation turntable. Secondly, the stable tracking model of the six-axis simulation turntable was established by a geometric analysis method, and simulation and reverse-checking computation results show that the stable tracking model is correct.
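
    For reference, the Lagrange method cited above derives the joint-space dynamics from the Lagrangian L = T - V (kinetic minus potential energy); for the generalized coordinates q_i and generalized forces tau_i of the six axes, the equations of motion are

    \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = \tau_i, \qquad i = 1, \dots, 6.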

    Severe-Dynamic Tracking Problems Based on Lower Particles Resampling

    Robust tracking of targets undergoing large dynamic changes is still challenging for existing methods. Sampling-based Bayesian filtering often suffers from the computational complexity associated with the large number of particles demanded and the weighing of multiple hypotheses. Specifically, this work proposes a neural auxiliary Bayesian filtering scheme based on Monte Carlo resampling techniques, which addresses the computational intensity intrinsic to all particle filters, including those that have been modified to overcome particle degeneracy. Tracking experiments under severe dynamics demonstrate that the neural network compensates for the Bayesian filtering error, yielding high accuracy and robust tracking performance while requiring fewer particles than sequential importance resampling (SIR) Bayesian filtering; the method also remains robust when the number of particles is low. DOI: http://dx.doi.org/10.11591/telkomnika.v12i6.549
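
    For context, the SIR baseline referred to above proceeds as follows; the sketch is generic (the dynamics and likelihood functions are caller-supplied assumptions, and this is not the paper's neural-aided filter).

    import numpy as np

    def sir_step(particles, weights, dynamics, likelihood, obs, rng):
        # particles: (N, d) state hypotheses; weights: (N,) normalized
        particles = dynamics(particles, rng)             # predict
        weights = weights * likelihood(obs, particles)   # re-weight
        weights = weights / weights.sum()
        n = len(weights)
        # Resample when the effective sample size collapses (degeneracy).
        if 1.0 / np.sum(weights ** 2) < n / 2:
            idx = rng.choice(n, size=n, p=weights)
            particles, weights = particles[idx], np.full(n, 1.0 / n)
        return particles, weights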

    Superpixel Segmentation Based on Grid Point Density Peak Clustering

    Superpixel segmentation is one of the key image preprocessing steps in object recognition and detection methods. However, over-segmentation of smoothly connected homogeneous regions in an image is a key problem, as it produces redundant, complex jagged textures. In this paper, density peak clustering is used to reduce redundant superpixels and highlight the primary textures and contours of the salient objects. Firstly, grid pixels are extracted as feature points, and the density of each feature point is defined. Secondly, the cluster centers are extracted at the density peaks. Finally, all feature points are clustered according to the density peaks. The pixel blocks obtained by these steps are the superpixels. The method is evaluated on the BSDS500 dataset, and the experimental results show that the Boundary Recall (BR) and Achievable Segmentation Accuracy (ASA) are 95.0% and 96.3%, respectively. In addition, the proposed method performs well in efficiency (30 fps). The comparison experiments show that the superpixel boundaries not only adhere well to the primary textures and contours of the salient objects but also effectively reduce the redundant superpixels in homogeneous regions.
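
    The density-peak step above follows the usual pattern: each point's local density rho and its distance delta to the nearest denser point are computed, and points where both are large become cluster centers. A minimal sketch (the cutoff distance d_c and the feature representation are assumptions):

    import numpy as np

    def density_peaks(points, d_c):
        # points: (N, d) grid feature points; d_c: density cutoff distance
        dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
        rho = (dist < d_c).sum(axis=1) - 1        # neighbors within d_c
        delta = np.empty(len(points))
        for i in range(len(points)):
            denser = rho > rho[i]
            delta[i] = dist[i, denser].min() if denser.any() else dist[i].max()
        return rho, delta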